
    Model-Free Learning of Two-Stage Beamformers for Passive IRS-Aided Network Design

    Electronically tunable metasurfaces, or Intelligent Reflecting Surfaces (IRSs), are a popular technology for achieving high spectral efficiency in modern wireless systems by shaping channels using a multitude of tunable passive reflecting elements. Motivated by key practical limitations of IRS-aided beamforming pertaining to system modeling and channel sensing/estimation, we propose a novel, fully data-driven Zeroth-order Stochastic Gradient Ascent (ZoSGA) algorithm for general two-stage (i.e., short/long-term), fully-passive IRS-aided stochastic utility maximization. ZoSGA learns long-term optimal IRS beamformers jointly with short-term optimal precoders (e.g., WMMSE-based) via minimal zeroth-order reinforcement and in a strictly model-free fashion, relying solely on the effective compound channels observed at the terminals, while being independent of channel models or network/IRS configurations. Another remarkable feature of ZoSGA is that it is amenable to analysis, enabling us to establish a state-of-the-art (SOTA) convergence rate of the order of O(√S ε⁻⁴) under minimal assumptions, where S is the total number of IRS elements and ε is a desired suboptimality target. Our numerical results on a standard MISO downlink IRS-aided sumrate maximization setting establish SOTA empirical behavior of ZoSGA as well, consistently and substantially outperforming standard fully model-based baselines. Lastly, we demonstrate that ZoSGA can in fact operate in the field, by directly optimizing the capacitances of a varactor-based electromagnetic IRS model (unknown to ZoSGA) in a multi-user/multi-IRS, link-dense network setting, with essentially no computational overheads or performance degradation.
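    As a concrete illustration of the zeroth-order machinery involved, the sketch below shows a generic two-point perturbation estimator driving a gradient-ascent step. This is a minimal sketch, not the paper's implementation: the oracle utility(), the step size, and the smoothing radius mu are hypothetical stand-ins, with utility() representing the scalar reward observed through the effective compound channels.

        import numpy as np

        def zo_gradient(utility, theta, mu=1e-3):
            # Two-point zeroth-order gradient estimate of a black-box utility:
            # probe the system along a random direction and form a directional
            # finite difference. Only utility *values* are needed, matching the
            # model-free setting (no channel model or IRS model required).
            u = np.random.randn(theta.size)
            u /= np.linalg.norm(u)
            delta = utility(theta + mu * u) - utility(theta - mu * u)
            return (delta / (2.0 * mu)) * u

        def zosga_step(utility, theta, step=1e-2, mu=1e-3):
            # One stochastic gradient-ascent step on the S IRS parameters.
            return theta + step * zo_gradient(utility, theta, mu)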

    Model-Free Learning of Optimal Beamformers for Passive IRS-Assisted Sumrate Maximization

    Although Intelligent Reflective Surfaces (IRSs) are a cost-effective technology promising high spectral efficiency in future wireless networks, obtaining optimal IRS beamformers is a challenging problem with several practical limitations. Assuming fully-passive, sensing-free IRS operation, we introduce a new data-driven Zeroth-order Stochastic Gradient Ascent (ZoSGA) algorithm for sumrate optimization in an IRS-aided downlink setting. ZoSGA does not require access to channel model or network structure information, and enables learning of optimal long-term IRS beamformers jointly with standard short-term precoding, based only on conventional effective channel state information. Supported by state-of-the-art (SOTA) convergence analysis, detailed simulations confirm that ZoSGA exhibits SOTA empirical behavior as well, consistently outperforming standard fully model-based baselines in a variety of scenarios.
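    For orientation, the sum-rate objective that such a scheme maximizes can be evaluated directly from conventional effective channel state information. The helper below is a hypothetical illustration (names and shapes are assumptions, not the paper's code): it computes the downlink sum-rate for K single-antenna users from the effective IRS-shaped channels H_eff and a precoding matrix W, e.g., one returned by a WMMSE inner solver.

        import numpy as np

        def downlink_sumrate(H_eff, W, noise_power=1.0):
            # H_eff: (K, M) complex effective channels, one row per user.
            # W:     (M, K) complex precoding matrix; column k serves user k.
            K = H_eff.shape[0]
            total = 0.0
            for k in range(K):
                gains = np.abs(H_eff[k] @ W) ** 2   # received power per stream
                signal = gains[k]
                interference = gains.sum() - signal
                total += np.log2(1.0 + signal / (interference + noise_power))
            return float(total)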

    A New Preconditioning Approach for an Interior Point–Proximal Method of Multipliers for Linear and Convex Quadratic Programming

    In this paper, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized interior point method. Application of this method gives rise to a sequence of increasingly ill-conditioned linear systems which cannot always be solved by factorization methods, due to memory and CPU time restrictions. We propose a novel preconditioning strategy which is based on a suitable sparsification of the normal equations matrix in the linear case, and also constitutes the foundation of a block-diagonal preconditioner to accelerate MINRES for linear systems arising from the solution of general quadratic programming problems. Numerical results for a range of test problems demonstrate the robustness of the proposed preconditioning strategy, together with its ability to solve linear systems of very large dimension.
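    The SciPy sketch below illustrates the kind of block-diagonal preconditioning the abstract describes, applied to a regularized saddle-point (KKT) system. The regularization parameters rho and delta, the drop tolerance, and the diagonal surrogate for the (1,1) block are illustrative assumptions; the paper's actual sparsification rule may differ.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def solve_kkt_minres(Q, A, rho, delta, rhs, drop_tol=1e-4):
            # Regularized KKT system of an interior point-proximal step:
            #     [ Q + rho*I    A^T     ] [dx]   [r1]
            #     [ A           -delta*I ] [dy] = [r2]
            n, m = Q.shape[0], A.shape[0]
            K = sp.bmat([[Q + rho * sp.eye(n), A.T],
                         [A, -delta * sp.eye(m)]], format="csr")

            # Block-diagonal preconditioner: a diagonal surrogate for the
            # (1,1) block, and a *sparsified* normal-equations matrix for the
            # Schur complement (small entries dropped before factorization;
            # too aggressive a tolerance may cost positive definiteness).
            d = Q.diagonal() + rho
            S = (A @ sp.diags(1.0 / d) @ A.T + delta * sp.eye(m)).tocsc()
            S.data[np.abs(S.data) < drop_tol] = 0.0
            S.eliminate_zeros()
            S_lu = spla.splu(S)

            def apply_prec(v):
                # Apply the inverse of the block-diagonal preconditioner.
                return np.concatenate([v[:n] / d, S_lu.solve(v[n:])])

            M = spla.LinearOperator(K.shape, matvec=apply_prec)
            dx_dy, info = spla.minres(K, rhs, M=M)
            return dx_dy, info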

    Strong Duality Relations in Nonconvex Risk-Constrained Learning

    We establish strong duality relations for functional two-step compositional risk-constrained learning problems with multiple nonconvex loss functions and/or learning constraints, regardless of nonconvexity and under a minimal set of technical assumptions. Our results in particular imply zero duality gaps within the class of problems under study, both extending and improving on the state of the art in (risk-neutral) constrained learning. More specifically, we consider risk objectives/constraints which involve real-valued convex and positively homogeneous risk measures admitting dual representations with bounded risk envelopes, generalizing expectations and including popular examples, such as the conditional value-at-risk (CVaR), the mean-absolute deviation (MAD), and more generally all real-valued coherent risk measures on integrable losses, as special cases. Our results are based on recent advances in risk-constrained nonconvex programming in infinite dimensions, which rely on a remarkable new application of J. J. Uhl’s convexity theorem, an extension of A. A. Lyapunov’s convexity theorem to general, infinite-dimensional Banach spaces. By specializing to the risk-neutral setting, we demonstrate, for the first time, that constrained classification and regression can be treated under a unifying lens, while dispensing with certain restrictive assumptions enforced in the current literature, yielding a new state-of-the-art strong duality framework for nonconvex constrained learning.
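    Schematically, with notation chosen here for illustration rather than taken from the paper (and using one common convention for CVaR), the primal-dual pair in question reads:

        \[
          P^\star \;=\; \inf_{f \in \mathcal{F}} \; \rho_0\big(\ell_0(f(X), Y)\big)
          \quad \text{s.t.} \quad \rho_i\big(\ell_i(f(X), Y)\big) \le c_i,
          \quad i = 1, \dots, m,
        \]
        where each $\rho_i$ is real-valued, convex, and positively homogeneous,
        admitting the dual (risk-envelope) representation
        $\rho_i(Z) = \sup_{Q \in \mathcal{A}_i} \mathbb{E}_Q[Z]$ over a bounded
        envelope $\mathcal{A}_i$; for instance, with tail level $\alpha \in (0,1)$,
        \[
          \operatorname{CVaR}_\alpha(Z)
          \;=\; \sup\big\{\, \mathbb{E}[Z W] \;:\; 0 \le W \le (1-\alpha)^{-1},
          \ \mathbb{E}[W] = 1 \,\big\}.
        \]
        A zero duality gap then means $P^\star = D^\star$ for the Lagrangian dual
        \[
          D^\star \;=\; \sup_{\lambda \ge 0} \, \inf_{f \in \mathcal{F}}
          \; \rho_0\big(\ell_0(f(X), Y)\big)
          + \sum_{i=1}^{m} \lambda_i \Big( \rho_i\big(\ell_i(f(X), Y)\big) - c_i \Big).
        \]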

    zosga-irs

    Code for the paper "Model-Free Learning of Optimal Beamformers for Passive IRS-Assisted Sumrate Maximization".

    Sparse Approximations with Interior Point Methods (to appear in December 2022)

    Large-scale optimization problems that seek sparse solutions have become ubiquitous. They are routinely solved with various specialized first-order methods. Although such methods are often fast, they usually struggle with not-so-well-conditioned problems. In this paper, specialized variants of an interior point–proximal method of multipliers are proposed and analyzed for problems of this class. Computational experience on a variety of problems, namely, multi-period portfolio optimization, classification of data coming from functional Magnetic Resonance Imaging, restoration of images corrupted by Poisson noise, and classification via regularized logistic regression, provides substantial evidence that interior point methods, equipped with suitable linear algebra, can offer a noticeable advantage over first-order approaches.
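    A standard device that makes such nonsmooth problems amenable to interior point methods is variable splitting; the reformulation below is the textbook version, shown for orientation only, as the paper's specialized variants may differ in detail:

        \[
          \min_{x \in \mathbb{R}^n} \; f(x) + \tau \|x\|_1
          \quad \Longleftrightarrow \quad
          \min_{u \ge 0,\; v \ge 0} \; f(u - v) + \tau\, \mathbf{1}^{\top}(u + v),
        \]
        so that the nonsmooth $\ell_1$ term becomes a linear objective over the
        nonnegative orthant. The orthant constraints are then handled by the
        interior point (log-barrier) machinery, while the proximal method of
        multipliers contributes the primal-dual regularization that keeps the
        resulting Newton systems well behaved.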